Ijraset Journal For Research in Applied Science and Engineering Technology
Authors: Deeksha, Dr. Salma Firdose
DOI Link: https://doi.org/10.22214/ijraset.2025.74109
Stress is a complex psychological and physiological state that exerts profound effects on human health, performance, and quality of life. With the growing incidence of stress-related disorders among students, employees, and healthcare populations, accurate and timely stress detection has emerged as a critical research priority. Traditional approaches, though valuable in controlled laboratory settings, are constrained by reliance on manual feature engineering, limited adaptability, and poor scalability in real-world applications. Recent advances in deep learning have transformed this field by enabling automated feature extraction and multimodal data integration, spanning physiological signals (EEG, ECG, GSR), behavioral modalities (facial expressions, speech, and text), and wearable sensor data. This review provides a comprehensive synthesis of existing studies on deep learning–based stress detection, systematically examining convolutional neural networks, recurrent networks, long short-term memory architectures, attention mechanisms, and hybrid models. Benchmark performance across widely used datasets such as WESAD, SWELL, DREAMER, and DEAP is critically compared using metrics including accuracy, precision, recall, F1-score, and latency. Beyond performance evaluation, this study highlights challenges in data scarcity, generalizability across populations, computational complexity, and ethical considerations related to privacy and bias. Finally, future research directions are outlined, emphasizing opportunities in real-time stress monitoring, multimodal fusion, transfer learning, and privacy-preserving frameworks. This review aims to serve as a structured and authoritative reference for advancing deep learning applications in stress detection and mental health monitoring.
1. Understanding Stress
Stress arises when individuals perceive a gap between external demands and their coping abilities. While short-term (acute) stress can improve focus, long-term (chronic) stress is linked to mental and physical health issues, including anxiety, depression, and heart disease. Early detection is essential across sectors like education, healthcare, and workplaces.
2. Limitations of Traditional Methods
Traditional stress assessment techniques—such as questionnaires and basic physiological measures—are often subjective, not scalable, and unsuitable for dynamic, real-time monitoring. These shortcomings have driven interest in automated, AI-powered stress detection systems.
Key Advantages of Deep Learning:
Automatic Feature Extraction: Deep learning models learn features directly from raw data.
Multimodal Capability: They can process physiological (EEG, ECG, GSR, HRV), behavioral (speech, facial expressions), and wearable sensor data.
Real-time Adaptation: Enable continuous monitoring in real-world environments.
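Continuous monitoring of this kind typically starts by segmenting a raw sensor stream into fixed-length, overlapping windows before any model sees it. The sketch below illustrates that preprocessing step; the window and step sizes are arbitrary examples, not values drawn from any cited study.

```python
# Illustrative sketch: split a continuous 1-D sensor stream into
# overlapping windows, the usual first step before feeding biosignals
# to a deep model. Window/step sizes here are made-up examples.

def sliding_windows(signal, window_size, step):
    """Return a list of overlapping windows over a 1-D signal."""
    return [
        signal[start:start + window_size]
        for start in range(0, len(signal) - window_size + 1, step)
    ]

# Example: a 10-sample stream, 4-sample windows, 50% overlap
stream = list(range(10))
windows = sliding_windows(stream, window_size=4, step=2)
print(len(windows))   # 4 windows
print(windows[0])     # [0, 1, 2, 3]
```

In deployed systems the window length is chosen to match the physiology of the signal (e.g., long enough to contain several heartbeats for ECG-derived features), and overlap trades latency against redundancy.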
Deep Learning Architectures:
CNNs (Convolutional Neural Networks):
Good for spatial data like facial expressions.
Limitations: Poor at modeling sequential (time-series) data.
RNNs/LSTMs (Recurrent Neural Networks / Long Short-Term Memory):
Effective for temporal patterns in biosignals (e.g., ECG, EEG).
Limitations: Slower training, more resource-intensive.
Transformers (Attention Models):
Excellent for modeling long-range dependencies and integrating multimodal data.
Limitations: Require large datasets and high computational resources.
Hybrid models combining CNNs, LSTMs, and Transformers are promising for comprehensive stress detection.
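The hybrid idea above can be made concrete with a toy sketch: a 1-D convolution extracts local waveform features (the "CNN" stage), and a simple recurrence aggregates them over time (standing in for an LSTM). This is purely illustrative NumPy code with random, untrained weights — it shows the data flow of such hybrids, not any model from the surveyed literature.

```python
# Toy CNN -> RNN pipeline over a stand-in biosignal. Random weights,
# illustrative only -- not a trained model from any cited study.
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels):
    """Valid 1-D convolution: (T,) signal -> (T-k+1, n_kernels) features."""
    k = kernels.shape[1]
    windows = np.stack([x[i:i + k] for i in range(len(x) - k + 1)])
    return windows @ kernels.T          # local "CNN" features

def rnn_last_state(feats, W_in, W_rec):
    """Plain tanh recurrence over the feature sequence; return final state."""
    h = np.zeros(W_rec.shape[0])
    for f in feats:
        h = np.tanh(W_in @ f + W_rec @ h)
    return h                             # temporal summary vector

signal = np.sin(np.linspace(0, 6 * np.pi, 100))      # stand-in biosignal
kernels = rng.standard_normal((8, 5))                # 8 filters, width 5
feats = conv1d(signal, kernels)                      # shape (96, 8)
state = rnn_last_state(feats, rng.standard_normal((4, 8)),
                       rng.standard_normal((4, 4)))  # shape (4,)
print(feats.shape, state.shape)
```

A real hybrid would replace the tanh recurrence with LSTM gates (or a Transformer encoder) and end with a classification head over the summary vector.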
Stress is detected using three main types of indicators:
Physiological: HRV, EDA, cortisol levels.
Behavioral: Speech changes, facial expressions, posture.
Psychological: Self-reported scales like PSS or STAI.
Stress is also categorized into acute, episodic, and chronic, each varying in duration and health impact.
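Among the physiological indicators, HRV is usually summarized by standard indices such as SDNN and RMSSD computed from RR intervals. The sketch below shows those two computations; the RR values are invented example data, and real systems would derive them from R-peak detection on ECG or PPG.

```python
# Two standard HRV indices from RR intervals (milliseconds).
# Example values are made up for illustration.
import math

def sdnn(rr):
    """Population standard deviation of RR intervals (overall variability)."""
    mean = sum(rr) / len(rr)
    return math.sqrt(sum((x - mean) ** 2 for x in rr) / len(rr))

def rmssd(rr):
    """Root mean square of successive differences (short-term variability)."""
    diffs = [b - a for a, b in zip(rr, rr[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

rr_ms = [800, 810, 790, 805, 820]   # hypothetical RR intervals
print(round(sdnn(rr_ms), 1))        # 10.0
print(round(rmssd(rr_ms), 1))       # 15.4
```

Lower HRV (smaller SDNN/RMSSD) is commonly associated with higher sympathetic activation, which is why these indices recur as stress features across the surveyed studies.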
Scope: 82 peer-reviewed studies from 2015–2024 focused on deep learning models applied to stress detection using physiological, behavioral, or multimodal data.
Databases Searched: Scopus, IEEE Xplore, Web of Science, PubMed, etc.
Inclusion Criteria: Use of deep learning (CNN, LSTM, Transformer), stress-specific, empirical results, peer-reviewed.
Benchmark datasets included: WESAD, SWELL, DREAMER, DEAP.
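The comparison metrics used across these benchmarks (accuracy, precision, recall, F1-score) reduce to simple confusion-matrix arithmetic for a binary stress/no-stress task. A plain-Python sketch, with invented example labels:

```python
# Accuracy, precision, recall, and F1 for binary stress labels.
# The label vectors below are invented for illustration.

def binary_metrics(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1

acc, prec, rec, f1 = binary_metrics([1, 1, 1, 0, 0, 1], [1, 0, 1, 0, 1, 1])
print(round(acc, 3), prec, rec, f1)   # 0.667 0.75 0.75 0.75
```

F1 matters in this domain because stress datasets such as WESAD are class-imbalanced, so raw accuracy alone can flatter a model that under-detects the stressed class.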
| Architecture | Strengths | Weaknesses | Use Cases |
|---|---|---|---|
| CNNs | Extract spatial features (e.g., facial cues) | Poor temporal modeling | Facial stress detection, posture |
| LSTMs/RNNs | Capture temporal patterns in biosignals | Training inefficiencies | ECG, EEG, speech-based stress detection |
| Transformers | Strong in multimodal fusion and attention | High computational demand | Real-time adaptive stress monitoring |
No single model is universally optimal; hybrid systems show the most promise.
Key Challenges:
Data Limitations: Small or imbalanced datasets limit generalizability.
Computational Demands: Resource-intensive models challenge real-time applications.
Ethical Issues: Privacy, fairness, and transparency need to be addressed.
Lab vs Real-world Data: Many studies rely on controlled data that may not reflect real-life stress patterns.
Future Directions:
Explainable AI for transparent decision-making.
Federated & Transfer Learning to improve generalization across populations.
Real-time deployment in wearable or mobile devices.
Privacy-preserving frameworks to protect user data.
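Federated learning, one of the directions listed above, trains a shared model without centralizing raw sensor data: clients send only model weights, which a server combines. A minimal sketch of the federated-averaging (FedAvg) aggregation step, with hypothetical client weights and dataset sizes:

```python
# Minimal FedAvg aggregation: combine per-client weight vectors,
# weighted by client dataset size. Only weights are shared -- never
# raw physiological data. All numbers below are hypothetical.

def fed_avg(client_weights, client_sizes):
    """Size-weighted average of per-client weight vectors."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Two clients: local weight vectors and local dataset sizes
global_w = fed_avg([[1.0, 0.0], [3.0, 2.0]], [1, 3])
print(global_w)   # [2.5, 1.5]
```

In a full system this aggregation runs once per round, after each client performs a few epochs of local training; differential privacy or secure aggregation can be layered on top to strengthen the privacy guarantee.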
The growing prevalence of stress in modern societies has underscored the urgent need for reliable, scalable, and ethically responsible detection frameworks. This review has highlighted how deep learning approaches—leveraging convolutional, recurrent, and transformer-based architectures—offer significant improvements in stress recognition by capturing complex, multimodal patterns from physiological, behavioral, and contextual signals. Across experimental studies, these models have consistently outperformed traditional machine learning techniques, demonstrating their capacity to handle high-dimensional, nonlinear data and to enable real-time monitoring in both controlled and real-world environments.

Despite these advances, the field is still evolving rather than mature. A key takeaway from this survey is that while deep learning has unlocked powerful predictive capabilities, it still faces persistent challenges in data scarcity, population diversity, privacy protection, and explainability. Without addressing these limitations, widespread adoption of stress detection systems in healthcare, education, workplace well-being, and personalized digital platforms will remain constrained.

Moving forward, the balance between technical performance and human-centered considerations will be decisive. Future systems must not only achieve high accuracy but also ensure transparency, fairness, and accountability, especially in sensitive domains such as mental health. The integration of explainable AI, federated learning, and adaptive personalization strategies can help build trust while respecting user privacy. At the same time, deployment across wearable and IoT ecosystems demands models that are lightweight, computationally efficient, and accessible to diverse populations.

In conclusion, deep learning represents a promising but still developing paradigm for stress detection. Its success will depend on reconciling algorithmic innovation with ethical safeguards, privacy-preserving frameworks, and user-oriented design. By addressing these critical intersections, the next generation of stress detection technologies can move beyond experimental prototypes toward becoming trusted, inclusive, and impactful tools for advancing mental health, well-being, and adaptive human–technology interaction.
Copyright © 2025 Deeksha, Dr. Salma Firdose. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Paper Id : IJRASET74109
Publish Date : 2025-09-06
ISSN : 2321-9653
Publisher Name : IJRASET